Proximal Gradient Algorithms Under Local Lipschitz Gradient Continuity
Authors
Abstract
Composite optimization offers a powerful modeling tool for a variety of applications and is often numerically solved by means of proximal gradient methods. In this paper, we consider fully nonconvex composite problems under only local Lipschitz continuity of the gradient of the smooth part of the objective function. We investigate an adaptive scheme for PANOC-type methods (Stella et al. in Proceedings of the IEEE 56th CDC, 1939--1944, 2017), namely accelerated linesearch algorithms requiring only a simple oracle for the gradient. While including the classical proximal gradient method, our theoretical results cover a broader class of algorithms and provide convergence guarantees with possibly inexact computation of the proximal mapping. These findings also have significant practical impact, as they widen the scope and performance of existing and future general purpose software that invoke PANOC as an inner solver.
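As a rough illustration of the kind of adaptive scheme the abstract alludes to, the following Python sketch performs one proximal gradient step with backtracking on the stepsize, so that no global Lipschitz constant of the gradient is needed. The helper names and the lasso toy problem are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def prox_grad_step_adaptive(x, f, grad_f, prox_g, gamma, shrink=0.5):
    # One forward-backward step with backtracking on the stepsize gamma:
    # gamma is shrunk until the standard quadratic upper-bound (sufficient
    # decrease) condition holds, so only *local* Lipschitz continuity of
    # grad_f is ever exploited.  (Hypothetical helper, not the paper's code.)
    fx, gx = f(x), grad_f(x)
    while True:
        z = prox_g(x - gamma * gx, gamma)   # forward-backward candidate
        d = z - x
        if f(z) <= fx + gx @ d + (d @ d) / (2.0 * gamma):
            return z, gamma
        gamma *= shrink                      # local Lipschitz estimate was too optimistic

# Toy usage on a lasso problem: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - lam * g, 0.0)  # soft-thresholding

x, gamma = np.zeros(5), 1.0
for _ in range(200):
    x, gamma = prox_grad_step_adaptive(x, f, grad_f, prox_g, gamma)
```

Note that this sketch only ever decreases gamma; PANOC-type adaptive schemes may also manage the estimate more cleverly across iterations.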
Similar resources
Local strong convexity and local Lipschitz continuity of the gradient of convex functions
Given a pair of convex conjugate functions f and f∗, we investigate the relationship between local Lipschitz continuity of ∇f and local strong convexity properties of f∗.
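For orientation, the classical global counterpart of this local correspondence is the standard conjugate-duality fact below, stated for a proper, closed, convex f; the cited paper localizes statements of this kind.

```latex
\[
  \nabla f \ \text{is $L$-Lipschitz}
  \quad\Longleftrightarrow\quad
  f^* \ \text{is } \tfrac{1}{L}\text{-strongly convex, i.e. }
  f^*(\cdot) - \tfrac{1}{2L}\|\cdot\|^2 \ \text{is convex.}
\]
```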
On Perturbed Proximal Gradient Algorithms
We study a version of the proximal gradient algorithm for which the gradient is intractable and is approximated by Monte Carlo methods (and in particular Markov Chain Monte Carlo). We derive conditions on the step size and the Monte Carlo batch size under which convergence is guaranteed: both increasing batch size and constant batch size are considered. We also derive non-asymptotic bounds for ...
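A minimal sketch of the scheme described here, assuming a user-supplied unbiased sampling oracle `sample_grad` (a hypothetical name): the intractable gradient is replaced by a Monte Carlo average, combined with a diminishing stepsize and either a constant or an increasing batch size.

```python
import numpy as np

def mc_prox_grad(x0, sample_grad, prox_g, n_iter=500, gamma0=0.1, batch0=10, grow=True):
    # Perturbed proximal gradient: the exact gradient E[G(x, xi)] is replaced
    # by an average of `batch` samples.  With grow=True the batch size
    # increases with the iteration counter (the "increasing batch size"
    # regime); grow=False gives the constant-batch regime.
    rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_iter + 1):
        batch = batch0 * k if grow else batch0
        g_hat = np.mean([sample_grad(x, rng) for _ in range(batch)], axis=0)
        gamma = gamma0 / np.sqrt(k)              # diminishing stepsize
        x = prox_g(x - gamma * g_hat, gamma)
    return x
```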
On Stochastic Proximal Gradient Algorithms
We study a perturbed version of the proximal gradient algorithm for which the gradient is not known in closed form and should be approximated. We address the convergence and derive a non-asymptotic bound on the convergence rate for the perturbed proximal gradient, a perturbed averaged version of the proximal gradient algorithm and a perturbed version of the fast iterative shrinkage-thresholding ...
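The fast (FISTA-type) variant mentioned above differs from the plain perturbed scheme only through Nesterov extrapolation; a hedged sketch, with `approx_grad` as an assumed inexact-gradient oracle:

```python
def perturbed_fista(x0, approx_grad, prox_g, n_iter=300, gamma=0.1):
    # Perturbed fast iterative shrinkage-thresholding: identical to exact
    # FISTA except that the gradient call may return an approximation.
    # approx_grad(y, k) is an assumed oracle for the gradient at y, step k.
    x_prev, y, t = x0, x0, 1.0
    for k in range(1, n_iter + 1):
        x = prox_g(y - gamma * approx_grad(y, k), gamma)    # inexact forward-backward
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0   # Nesterov momentum sequence
        y = x + ((t - 1.0) / t_next) * (x - x_prev)         # extrapolation
        x_prev, t = x, t_next
    return x_prev
```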
A New Perspective of Proximal Gradient Algorithms
We provide a new perspective to understand proximal gradient algorithms. We show that both the proximal gradient algorithm (PGA) and the Bregman proximal gradient algorithm (BPGA) can be viewed as a generalized proximal point algorithm (GPPA), based on which more accurate convergence rates of PGA and BPGA are obtained directly. Furthermore, based on the GPPA framework, we incorporate the back-tracking line s...
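The standard identity behind this proximal-point viewpoint (a classical reformulation, not taken verbatim from the cited paper) is that each PGA step is a proximal-point step on the objective with f replaced by its linearization at the current iterate:

```latex
\[
  x^{k+1}
  = \operatorname{prox}_{\gamma g}\!\bigl(x^k - \gamma \nabla f(x^k)\bigr)
  = \operatorname*{arg\,min}_{x}
    \Bigl\{ g(x) + f(x^k) + \langle \nabla f(x^k),\, x - x^k \rangle
            + \tfrac{1}{2\gamma}\,\|x - x^k\|^2 \Bigr\}.
\]
```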
Proximal Gradient Temporal Difference Learning Algorithms
In this paper, we describe proximal gradient temporal difference learning, which provides a principled way for designing and analyzing true stochastic gradient temporal difference learning algorithms. We show how gradient TD (GTD) reinforcement learning methods can be formally derived, not with respect to their original objective functions as previously attempted, but rather with respect to pri...
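As one concrete member of the gradient-TD family discussed here, a minimal sketch of the GTD2 update; the function name, feature-vector setup, and stepsize values are illustrative assumptions.

```python
import numpy as np

def gtd2_step(w, theta, phi, phi_next, reward, gamma, alpha, beta):
    # One GTD2 update (a representative gradient-TD method):
    # w: value-function weights, theta: auxiliary correction weights,
    # phi/phi_next: feature vectors of the current/next state,
    # alpha/beta: the two stepsizes of the saddle-point scheme.
    delta = reward + gamma * (phi_next @ w) - phi @ w     # TD error
    w = w + alpha * (phi - gamma * phi_next) * (phi @ theta)
    theta = theta + beta * (delta - phi @ theta) * phi
    return w, theta

# e.g. w, theta = gtd2_step(np.zeros(4), np.zeros(4), phi, phi_next, r, 0.99, 0.05, 0.2)
```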
Journal
Journal title: Journal of Optimization Theory and Applications
Year: 2022
ISSN: 0022-3239, 1573-2878
DOI: https://doi.org/10.1007/s10957-022-02048-5